PRP-Type Direct Search Methods for Unconstrained Optimization
Authors
Abstract
Related works
A direct search method for smooth and nonsmooth unconstrained optimization
A derivative-free, frame-based method for minimizing C1 and nonsmooth functions is described. A 'black-box' function is assumed, with gradients being unavailable. The use of frames allows gradient estimates to be formed. At each iteration a ray search is performed either along a direct search quasi-Newton direction, or along the ray through the best frame point. The use of randomly oriented frame...
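The frame-based details are specific to that paper; as a generic illustration of the derivative-free direct-search idea it builds on (polling a set of directions around the incumbent point and shrinking the step when no poll point improves), here is a minimal compass-search sketch. The test function and starting point are made up for illustration:

```python
def compass_search(f, x, step=1.0, tol=1e-6, shrink=0.5):
    """Derivative-free compass (direct) search: poll the 2n coordinate
    directions around x; move to any improving point, otherwise shrink
    the step until it falls below tol."""
    n = len(x)
    while step > tol:
        improved = False
        for i in range(n):
            for s in (step, -step):
                y = list(x)
                y[i] += s
                if f(y) < f(x):      # accept the first improving poll point
                    x, improved = y, True
                    break
            if improved:
                break
        if not improved:             # unsuccessful poll: refine the mesh
            step *= shrink
    return x

# illustrative quadratic with minimizer at (1, -2)
xmin = compass_search(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2, [0.0, 0.0])
```

Only function values are used, matching the 'black-box' assumption in the abstract; the frame-based method additionally exploits the poll points to form gradient estimates.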
Nonmonotone curvilinear line search methods for unconstrained optimization
We present a new algorithmic framework for solving unconstrained minimization problems that incorporates a curvilinear line search. The search direction used in our framework is a combination of an approximate Newton direction and a direction of negative curvature. Global convergence to a stationary point where the Hessian matrix is positive semidefinite is exhibited for this class of algorithms ...
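To make the curvilinear idea concrete, the sketch below follows the standard construction rather than that paper's exact framework (and uses a plain monotone Armijo-style test instead of a nonmonotone one): the path x(α) = x + α²s + αd combines a modified Newton direction s with a negative-curvature direction d taken from the Hessian's eigendecomposition. The test function, tolerances, and constants are assumptions:

```python
import numpy as np

def curvilinear_minimize(f, grad, hess, x, iters=50):
    """Curvilinear search sketch: step along x(alpha) = x + alpha^2 s + alpha d,
    where s is a modified-Newton direction and d a direction of negative
    curvature, with simple backtracking on alpha."""
    for _ in range(iters):
        g, H = grad(x), hess(x)
        lam, V = np.linalg.eigh(H)            # eigenvalues in ascending order
        # Newton-like descent direction from the positive-curvature part
        lam_pos = np.maximum(lam, 1e-8)
        s = -V @ ((V.T @ g) / lam_pos)
        # negative-curvature direction (zero if H is positive semidefinite)
        d = np.zeros_like(x)
        if lam[0] < -1e-8:
            d = V[:, 0].copy()
            if g @ d > 0:                     # orient d downhill
                d = -d
        alpha, fx = 1.0, f(x)
        # backtrack until a sufficient-decrease condition holds
        while (f(x + alpha**2 * s + alpha * d)
               > fx - 1e-4 * alpha**2 * abs(g @ s)) and alpha > 1e-10:
            alpha *= 0.5
        x = x + alpha**2 * s + alpha * d
    return x

# illustrative function with a saddle near the start and minima at y = +/-1
f = lambda v: v[0]**2 + (v[1]**2 - 1)**2
grad = lambda v: np.array([2*v[0], 4*v[1]*(v[1]**2 - 1)])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 12*v[1]**2 - 4]])
x_star = curvilinear_minimize(f, grad, hess, np.array([0.5, 0.0]))
```

The negative-curvature direction is what lets the iterate escape the indefinite region near y = 0, which a pure Newton step (with a positive-definite modification alone) would not exploit.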
Line search methods with variable sample size for unconstrained optimization
Minimization of an unconstrained objective function given in the form of a mathematical expectation is considered. The Sample Average Approximation (SAA) method transforms the expectation objective function into a real-valued deterministic function using a large sample, and thus deals with deterministic function minimization. The main drawback of this approach is its cost. A large sample of the random variable tha...
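A minimal sketch of the SAA transformation the abstract describes, assuming a toy objective E[(x − ξ)²] with ξ standard normal and a crude derivative-free descent on the resulting deterministic sample average (all names and parameters are illustrative):

```python
import random

def saa_minimize(f_sample, x0, n_samples=1000, step=0.1, iters=200, seed=0):
    """Sample Average Approximation: replace E[f(x, xi)] by the average of
    f over one fixed sample of xi, then minimize that deterministic
    function with a simple fixed-step search."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]

    def f_hat(x):
        # deterministic SAA objective: average over the fixed sample
        return sum(f_sample(x, xi) for xi in xs) / n_samples

    x = x0
    for _ in range(iters):
        for d in (step, -step):
            if f_hat(x + d) < f_hat(x):
                x = x + d
                break
    return x

# E[(x - xi)^2] with xi ~ N(0, 1) is minimized near x = 0
x_hat = saa_minimize(lambda x, xi: (x - xi)**2, x0=5.0)
```

Note the cost the abstract points at: every evaluation of the deterministic f_hat requires a full pass over the large fixed sample, which motivates variable-sample-size line search schemes.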
Hybrid Probabilistic Search Methods for Simulation Optimization
Discrete-event simulation-based optimization is the process of finding the optimum design of a stochastic system when the performance measure(s) can only be estimated via simulation. Randomness in simulation outputs often challenges the correct selection of the optimum. We propose an algorithm that merges ranking-and-selection procedures with a large class of random search methods for continu...
Newton-type methods for unconstrained and linearly constrained optimization
This paper describes two numerically stable methods for unconstrained optimization and their generalization when linear inequality constraints are added. The difference between the two methods is simply that one requires the Hessian matrix explicitly and the other does not. The methods are intimately based on the recurrence of matrix factorizations and are linked to earlier work on quasi-Newton...
Journal
Journal title: Applied Mathematics
Year: 2011
ISSN: 2152-7385,2152-7393
DOI: 10.4236/am.2011.26096